Constituents often fail to hold their representatives accountable for federal spending decisions, even though those very choices have a pervasive influence on American life. Why does this happen? Breaking new ground in the study of representation, The Impression of Influence demonstrates how legislators skillfully inform constituents through strategic communication and how this facilitates or undermines accountability. Using a massive collection of Congressional texts and innovative experiments and methods, the book shows how legislators create an impression of influence through credit-claiming messages.
This book demonstrates the consequences of legislators' strategic communication for representation in American politics. Representational Style in Congress shows how legislators present their work to cultivate constituent support. Drawing on a massive new data set of texts from legislators and new statistical techniques to analyze those texts, the book provides comprehensive measures of what legislators say to constituents and explains why legislators adopt these styles. Using the new measures, Justin Grimmer shows how legislators shape constituents' evaluations of their representatives and traces the consequences of strategic statements for political discourse. The new statistical techniques for political texts allow a more comprehensive and systematic analysis of what legislators say, and why it matters, than was previously possible. With these techniques, the book makes a compelling case that to understand political representation, we must understand what legislators say to constituents.
Information is being produced and stored at an unprecedented rate. The promise of the 'big data' revolution is that these data hold the answers to fundamental questions of businesses, governments, and the social sciences. Many of the most boisterous claims come from computational fields, which have little experience with the difficulty of social scientific inquiry. As social scientists, we may reassure ourselves that we know better. This is true; 'big data' alone is insufficient for solving society's most pressing problems, but it certainly can help. This paper argues that big data provides the opportunity to learn about quantities that were infeasible to measure only a few years ago. The opportunity for descriptive inference creates the chance for political scientists to ask causal questions and create new theories that previously would have been impossible (Monroe et al. 2015). Furthermore, when paired with experiments or robust research designs, 'big data' can provide data-driven answers to vexing questions, and combining large datasets with social scientific research designs makes both even more potent. The analysis of big data, then, is not only a matter of solving computational problems, even if those working on big data in industry primarily come from the natural sciences or computational fields. Rather, expertly analyzing big data also requires thoughtful measurement (Patty and Penn 2015), careful research design, and the creative deployment of statistical techniques. For the analysis of big data to truly yield answers to society's biggest problems, we must recognize that it is as much about social science as it is about computer science.
Congressional districts create two levels of representation. Studies of representation focus on a disaggregated level: the electoral connection between representatives and constituents. But there is a collective level of representation—the result of aggregating across representatives. This article uses new measures of home styles to demonstrate that responsiveness to constituents can have negative consequences for collective representation. The electoral connection causes marginal representatives—legislators with districts composed of the other party's partisans—to emphasize appropriations in their home styles. But it causes aligned representatives—those with districts filled with copartisans—to build their home styles around position taking. Aggregated across representatives, this results in an artificial polarization in stated party positions: aligned representatives, who tend to be ideologically extreme, dominate policy debates. The logic and evidence in this article provide an explanation for the apparent rise in vitriolic debate, and the new measures facilitate a literature on home styles.
In: Political Analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 19, No. 1, pp. 32-47
Markov chain Monte Carlo (MCMC) methods have facilitated an explosion of interest in Bayesian methods. MCMC is an incredibly useful and important tool but can face difficulties when used to estimate complex posteriors or models applied to large data sets. In this paper, we show how a recently developed tool in computer science for fitting Bayesian models, variational approximations, can be used to facilitate the application of Bayesian models to political science data. Variational approximations are often much faster than MCMC for fully Bayesian inference and in some instances facilitate the estimation of models that would be otherwise impossible to estimate. As a deterministic posterior approximation method, variational approximations are guaranteed to converge and convergence is easily assessed. But variational approximations do have some limitations, which we detail below. Therefore, variational approximations are best suited to problems when fully Bayesian inference would otherwise be impossible. Through a series of examples, we demonstrate how variational approximations are useful for a variety of political science research. This includes models to describe legislative voting blocs and statistical models for political texts. The code that implements the models in this paper is available in the supplementary material.
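The abstract's models for voting blocs and texts are beyond a short sketch, but the core mechanics it describes, a deterministic coordinate update iterated until convergence, can be illustrated with textbook mean-field CAVI for a Gaussian with unknown mean and precision. This is a generic example, not one of the paper's models; all names and priors below are illustrative.

```python
import random

def cavi_gaussian(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, tol=1e-10, max_iter=100):
    """Mean-field variational approximation q(mu)q(tau) for the model
    x_i ~ N(mu, 1/tau), mu ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0).
    Returns parameters of q(mu) = N(mu_n, 1/lam_n) and q(tau) = Gamma(a_n, b_n)."""
    n = len(x)
    xbar = sum(x) / n
    # These two quantities are fixed by conjugacy; only E[tau] is iterated.
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    a_n = a0 + (n + 1) / 2.0
    b_n = b0
    for _ in range(max_iter):
        e_tau = a_n / b_n                    # current E[tau] under q(tau)
        lam_n = (lam0 + n) * e_tau           # update q(mu)
        # Expected squared deviations under q(mu):
        ss = sum((xi - mu_n) ** 2 + 1.0 / lam_n for xi in x)
        prior_term = lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n)
        b_new = b0 + 0.5 * (ss + prior_term)  # update q(tau)
        converged = abs(b_new - b_n) < tol    # deterministic convergence check
        b_n = b_new
        if converged:
            break
    return mu_n, (lam0 + n) * a_n / b_n, a_n, b_n

random.seed(1)
data = [random.gauss(2.0, 0.5) for _ in range(500)]
mu_n, lam_n, a_n, b_n = cavi_gaussian(data)
print(mu_n, a_n / b_n)  # approximate posterior mean of mu, and E[tau]
```

Because each update is a closed-form coordinate step that monotonically improves the variational objective, convergence is easy to assess by watching the parameters stop changing, in contrast to MCMC convergence diagnostics.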
In: Political Analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 18, No. 1, pp. 1-35
Political scientists lack methods to efficiently measure the priorities political actors emphasize in statements. To address this limitation, I introduce a statistical model that attends to the structure of political rhetoric when measuring expressed priorities: statements are naturally organized by author. The expressed agenda model exploits this structure to simultaneously estimate the topics in the texts and the attention political actors allocate to the estimated topics. I apply the method to a collection of over 24,000 press releases issued by senators in 2007, which I demonstrate is an ideal medium for measuring how senators explain their work in Washington to constituents. A set of examples validates the estimated priorities and demonstrates their usefulness for testing theories of how members of Congress communicate with constituents. The statistical model and its extensions will be made available in a forthcoming free software package for the R computing language.
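The expressed agenda model itself is more involved, but its core idea, documents grouped by author with author-specific attention over shared topics, can be sketched as EM for a mixture of multinomials with per-author mixing weights. This is a simplification for illustration only; the function name, toy corpus, and smoothing constants are all invented here, not taken from the paper.

```python
import math
import random

def em_author_topics(docs, authors, n_topics, vocab_size, n_iter=30, seed=0):
    """EM for a simplified author-structured topic mixture: each document d
    (a word-count vector) by author a is generated by drawing one topic
    z ~ pi[a], then words ~ Multinomial(beta[z]). Returns per-author
    attention pi and topic-word distributions beta."""
    rng = random.Random(seed)
    author_ids = sorted(set(authors))
    # Random smoothed initialization of topic-word distributions.
    beta = []
    for _ in range(n_topics):
        w = [rng.random() + 0.1 for _ in range(vocab_size)]
        s = sum(w)
        beta.append([v / s for v in w])
    pi = {a: [1.0 / n_topics] * n_topics for a in author_ids}
    for _ in range(n_iter):
        # E-step: topic responsibilities per document, computed in log space.
        resp = []
        for d, a in zip(docs, authors):
            logp = [math.log(pi[a][k]) +
                    sum(c * math.log(beta[k][w]) for w, c in enumerate(d) if c)
                    for k in range(n_topics)]
            m = max(logp)
            p = [math.exp(l - m) for l in logp]
            s = sum(p)
            resp.append([v / s for v in p])
        # M-step: re-estimate author attention (floored to avoid log(0))...
        for a in author_ids:
            idx = [i for i, aa in enumerate(authors) if aa == a]
            tot = [sum(resp[i][k] for i in idx) for k in range(n_topics)]
            s = sum(tot) + n_topics * 1e-6
            pi[a] = [(t + 1e-6) / s for t in tot]
        # ...and topic-word distributions, with light smoothing.
        for k in range(n_topics):
            counts = [0.01] * vocab_size
            for i, d in enumerate(docs):
                for w, c in enumerate(d):
                    counts[w] += resp[i][k] * c
            s = sum(counts)
            beta[k] = [c / s for c in counts]
    return pi, beta

# Toy corpus: author "A" uses only words {0,1}; author "B" only words {2,3}.
docs = [[5, 5, 0, 0], [4, 6, 0, 0], [0, 0, 5, 5], [0, 0, 6, 4]]
authors = ["A", "A", "B", "B"]
pi, beta = em_author_topics(docs, authors, n_topics=2, vocab_size=4)
```

On this toy corpus the two authors' estimated attention vectors concentrate on different topics, mirroring the idea that topics and author-level attention are estimated jointly from the same likelihood.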
Social scientists are interested in the effects of low-dimensional latent treatments within texts, such as the effect of an attack on a candidate in a political advertisement. We provide a framework for causal inference with latent treatments in high-dimensional interventions. Using this framework, we show that the randomization of texts alone is insufficient to identify the causal effects of latent treatments, because other unmeasured treatments in the text could confound the measured treatment's effect. We provide a set of assumptions that is sufficient to identify the effect of latent treatments and a set of strategies to make these assumptions more plausible, including explicitly adjusting for potentially confounding text features and nontraditional experimental designs involving many versions of the text. We apply our framework to a survey experiment and an observational study, demonstrating how our framework makes text-based causal inferences more credible.
An increasing number of states have adopted laws that require voters to show photo identification to vote. We show that the differential effect of the laws on turnout among those who lack ID persists even after the laws are repealed. We leverage administrative data from North Carolina and a photo ID law in effect for a primary, but not the subsequent general, election. Using exact matching and a difference-in-differences design, we show that for the 3 percent of voters who lack ID in North Carolina, the ID law caused a 0.7 percentage point turnout decrease in the 2016 primary election relative to those with ID. After the law was suspended, this effect persisted: those without ID were 2.6 percentage points less likely to turn out in the 2016 general election and 1.7 percentage points less likely to turn out in the 2018 general election.
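The difference-in-differences logic behind estimates like these can be shown with a minimal computation on made-up turnout rates. The numbers below are illustrative only, not the paper's data or estimates.

```python
def diff_in_diff(pre_treated, post_treated, pre_control, post_control):
    """Difference-in-differences estimate: the over-time change in the
    treated group minus the over-time change in the control group."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Illustrative turnout rates (fractions of each group voting):
# "treated" = voters lacking photo ID, "control" = voters with ID.
effect = diff_in_diff(pre_treated=0.45, post_treated=0.40,
                      pre_control=0.50, post_control=0.52)
print(round(effect, 3))  # -0.07: a 7 percentage point relative decline
```

Subtracting the control group's change nets out election-to-election swings common to both groups, isolating the differential effect on voters who lack ID.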
Using data from the World Values Survey, we analyze the extent to which value consensus exists within countries. To do this, we introduce a statistical model which allows us to generate country-level measures of cultural heterogeneity. Our statistical approach models each country as a mixture of subcultures that are shared across the world. Our results demonstrate that value consensus varies substantially across countries and regions.
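One simple way to turn a country's estimated subculture mixture weights into a heterogeneity score, offered here only as a stand-in for whatever summary the paper actually uses, is the entropy of the weights:

```python
import math

def mixture_entropy(weights):
    """Entropy of a country's subculture mixture weights: 0 under full
    consensus (all mass on one subculture), log(k) under maximal
    heterogeneity (mass spread evenly over k subcultures)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return -sum(w * math.log(w) for w in weights if w > 0)

consensus = mixture_entropy([1.0, 0.0, 0.0])  # single shared subculture
split = mixture_entropy([0.5, 0.3, 0.2])      # partial consensus
uniform = mixture_entropy([1/3, 1/3, 1/3])    # maximal heterogeneity
```

A scalar like this makes country-level comparisons straightforward: higher entropy means survey responses are spread across more distinct subcultures.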